
    The Score-Difference Flow for Implicit Generative Modeling

    Implicit generative modeling (IGM) aims to produce samples of synthetic data matching the characteristics of a target data distribution. Recent work (e.g. score-matching networks, diffusion models) has approached the IGM problem from the perspective of pushing synthetic source data toward the target distribution via dynamical perturbations or flows in the ambient space. In this direction, we present the score difference (SD) between arbitrary target and source distributions as a flow that optimally reduces the Kullback-Leibler divergence between them while also solving the Schrödinger bridge problem. We apply the SD flow to convenient proxy distributions, which are aligned if and only if the original distributions are aligned. We demonstrate the formal equivalence of this formulation to denoising diffusion models under certain conditions. We also show that the training of generative adversarial networks includes a hidden data-optimization sub-problem, which induces the SD flow under certain choices of loss function when the discriminator is optimal. As a result, the SD flow provides a theoretical link between model classes that individually address the three challenges of the "generative modeling trilemma" -- high sample quality, mode coverage, and fast sampling -- thereby setting the stage for a unified approach. Comment: 25 pages, 5 figures, 4 tables. To appear in Transactions on Machine Learning Research (TMLR).
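
    As a rough illustration of the flow described above, the sketch below pushes source particles along the difference between a target score and an estimate of the current particle distribution's score. The 1D Gaussian setup, the closed-form score estimate, and the step size are assumptions made purely for illustration; the paper's proxy distributions and learned scores are not reproduced here.

```python
# Minimal sketch of a score-difference (SD) flow in one dimension.
# Assumptions (ours, not the paper's): the flow is
#   dx/dt = score_target(x) - score_current(x),
# the target is Gaussian, and the evolving particle distribution is
# approximated as Gaussian so its score has a closed form.
import numpy as np

def gaussian_score(x, mean, var):
    """Score, i.e. d/dx log N(x; mean, var)."""
    return -(x - mean) / var

rng = np.random.default_rng(0)
particles = rng.normal(-2.0, 1.0, size=5000)   # samples from the source
target_mean, target_var = 3.0, 0.5

step = 0.05
for _ in range(400):
    cur_mean, cur_var = particles.mean(), particles.var()
    sd = (gaussian_score(particles, target_mean, target_var)
          - gaussian_score(particles, cur_mean, cur_var))
    particles = particles + step * sd           # move along the SD flow

print(round(particles.mean(), 2), round(particles.var(), 2))  # approaches 3.0, 0.5
```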

    Exploring the Relationships between Hemoglobin, the Endothelium and Vascular Health in Patients with Chronic Kidney Disease

    Background/Aims: The ideal hemoglobin target in chronic kidney disease remains unknown. Ultimately, individualized targets may depend upon the properties of the patient’s endothelial and vascular milieu, and thus the complex relationships between these factors need to be further explored. Methods: Forty-six patients with a reduced glomerular filtration rate (GFR) or on renal replacement therapy underwent measurement of hemoglobin, endothelial microparticles (EMPs) and aortic pulse wave velocity (PWV) at 0, 3 and 6 months. In addition, a number of inflammatory, cardiac and vascular biomarkers were measured at baseline. Results: No correlation was observed between baseline values of PWV and EMPs, PWV and hemoglobin, or hemoglobin and EMPs in the overall cohort. When stratified by CKD status, a positive correlation was observed between PWV and EMP CD41–/CD144+ in patients with reduced GFR only (r = 0.54, p = 0.01). Asymmetric dimethylarginine correlated with baseline PWV (r = 0.27, p = 0.07), and remained significantly correlated with the 3- and 6-month PWV measurements. Conclusions: In this small heterogeneous cohort of dialysis and non-dialysis patients, we were unable to describe a physiologic link between anemia, endothelial dysfunction and arterial stiffness.

    Examination of optimizing information flow in networks

    The central role of the Internet and the World-Wide-Web in global communications has refocused much attention on problems involving optimizing information flow through networks. The most basic formulation of the question is the "max flow" optimization problem: given a set of channels with prescribed capacities that connect a set of nodes in a network, how should material or information be distributed among the various routes to maximize the total flow rate from the source to the destination? Theory in linear programming has been well developed to solve the classic max flow problem. Modern contexts have demanded the examination of more complicated variations of the max flow problem to take new factors or constraints into consideration; these changes lead to more difficult problems where linear programming is insufficient.

    In the workshop we examined models for information flow on networks that considered trade-offs between the overall network utility (or flow rate) and path diversity, to ensure balanced usage of all parts of the network (and hence stability and robustness against local disruptions in parts of the network). While the linear programming solution of the basic max flow problem cannot handle this setting, the primal/dual formulation used to describe the constrained optimization problem carries over to the current generation of problems, called network utility maximization (NUM) problems. In particular, primal/dual formulations have been used extensively in studies of such networks. A key feature of the traffic-routing model we are considering is its formulation as an economic system, governed by principles of supply and demand. Considering channel capacity as a commodity in limited supply, we might suspect that a system regulating traffic via a pricing scheme would assign prices to channels in a manner inversely proportional to their respective capacities.

    Once an appropriate network optimization problem has been formulated, it remains to solve it; this must be done numerically, but the process can benefit greatly from simplifications and reductions that follow from analysis of the problem. Ideally, the form of the numerical solution scheme can give insight into the design of a distributed algorithm for a Transmission Control Protocol (TCP) that can be directly implemented on the network. At the workshop we considered the optimization problems for two small prototype network topologies: the two-link network and the diamond network. These examples are small enough to be tractable during the workshop, but retain some of the key features relevant to larger networks (competing routes with different capacities from the source to the destination, and routes with overlapping channels, respectively). We studied a gradient method for obtaining the optimal solution via the dual problem; the numerical method was implemented in MATLAB, and further analysis of the dual problem and of the properties of the gradient method was carried out. Another thrust of the group's work was direct simulation of information flow in these small networks via Monte Carlo methods, as a means of directly testing the efficiencies of various allocation strategies.
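
    To make the primal/dual approach concrete, the sketch below runs dual (sub)gradient ascent on a toy NUM instance with two links and three flows, one of which uses both links. The log utilities, capacities, step size, and topology are illustrative assumptions; this is not the workshop's MATLAB implementation.

```python
# Toy network utility maximization (NUM) solved by gradient ascent on the dual.
# Link prices play the role of the economic "pricing scheme" described above.
import numpy as np

# Routing matrix R[l, s] = 1 if flow s uses link l. Flow 0 uses link 0,
# flow 1 uses link 1, flow 2 uses both (overlapping channels).
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 2.0])      # link capacities
lam = np.ones(2)              # link prices (dual variables)
alpha = 0.05                  # dual step size

for _ in range(5000):
    # With utility U(x) = log x, each flow's best response to the current
    # prices is x_s = 1 / (sum of prices along its path).
    path_price = R.T @ lam
    x = 1.0 / np.maximum(path_price, 1e-9)
    # Dual (sub)gradient step: raise prices on overloaded links, lower them
    # on underused links, and keep prices nonnegative.
    lam = np.maximum(0.0, lam + alpha * (R @ x - c))

print("rates:", np.round(x, 3), "prices:", np.round(lam, 3))
```

    With a small fixed step, the prices settle near values at which both capacity constraints are tight, illustrating how the dual variables act as congestion prices.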

    What About the Girls? Exploring the Gender Data Gap in Talent Development

    Although there is an extensive literature about talent development, the lack of data pertaining to females is problematic. Indeed, the gender data gap can be seen in practically all domains, including sport and exercise medicine. Evidence-based practice is the systematic reviewing of the best evidence in order to make informed choices about practice. Unfortunately, it may be that the data collected in sport are typically about male experiences, not female ones; a rather unfortunate omission given that approximately half of the population is made up of women. When female athletes are underrepresented in research, it becomes problematic to draw inferences from data collected in male-dominated research domains to inform practice and policy for female athletes. In parallel, female sport participation is continually increasing worldwide. Recognizing the importance of evidence-based practice in driving policy and practice, and reflecting the gender data gap that is a consistent feature of (almost) all other domains, we were interested in examining whether a gender data gap exists in talent development research. The results suggest that a gender data gap exists across all topics in talent development research. Youth athlete development pathways may be failing to recognize the development requirements of females, particularly where female sports borrow systems that are perceived to work for their male counterparts. In order to ensure robust evidence-based practice in female youth sport, there is a need to increase the visibility of female athletes in the talent development literature.

    Decision-tree analysis of control strategies

    A major focus of research on visually guided action is the identification of control strategies that map optical information to actions. The traditional approach has been to test the behavioral predictions of a few hypothesized strategies against subject behavior in environments in which various manipulations of available information have been made. While important and compelling results have been achieved with these methods, they are potentially limited by small sets of hypotheses and the methods used to test them. In this study, we introduce a novel application of data-mining techniques in an analysis of experimental data that is able to both describe and model human behavior. This method permits the rapid testing of a wide range of possible control strategies using arbitrarily complex combinations of optical variables. Through the use of decision-tree techniques, subject data can be transformed into an easily interpretable, algorithmic form. This output can then be immediately incorporated into a working model of subject behavior. We tested the effectiveness of this method in identifying the optical information used by human subjects in a collision-avoidance task. Our results comport with published research on collision-avoidance control strategies while also providing additional insight not possible with traditional methods. Further, the modeling component of our method produces behavior that closely resembles that of the subjects upon whose data the models were based. Taken together, the findings demonstrate that data-mining techniques provide powerful new tools for analyzing human data and building models that can be applied to a wide range of perception-action tasks, even outside the visual-control setting we describe.
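
    The sketch below illustrates the general recipe with scikit-learn: candidate optical variables serve as features, the subject's discrete action as the label, and the fitted tree is printed as an interpretable, rule-like control strategy. The features (bearing angle, its rate, and time-to-contact) and the synthetic "subject" rule are hypothetical placeholders, not the study's data or its actual variable set.

```python
# Fit a decision tree mapping candidate optical variables to an avoidance action,
# then print it as human-readable rules (a stand-in for a control strategy).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 2000
bearing = rng.uniform(-30, 30, n)       # obstacle bearing, degrees
bearing_rate = rng.uniform(-5, 5, n)    # degrees per second
ttc = rng.uniform(0.5, 6.0, n)          # time-to-contact, seconds

# Synthetic "subject": turn away only when the obstacle is close (small ttc)
# and roughly on a collision course (small |bearing_rate|).
action = np.where((ttc < 2.0) & (np.abs(bearing_rate) < 1.0),
                  np.where(bearing >= 0, "turn_left", "turn_right"),
                  "keep_heading")

X = np.column_stack([bearing, bearing_rate, ttc])
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, action)
print(export_text(tree, feature_names=["bearing", "bearing_rate", "ttc"]))
```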

    Disentangled Dynamic Representations from Unordered Data

    We present a deep generative model that learns disentangled static and dynamic representations of data from unordered input. Our approach exploits regularities in sequential data that exist regardless of the order in which the data is viewed. The result of our factorized graphical model is a well-organized and coherent latent space for data dynamics. We demonstrate our method on several synthetic dynamic datasets and real video data featuring various facial expressions and head poses.
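
    As a schematic reading of the idea (not the paper's factorized graphical model), the sketch below encodes each frame independently, pools the per-frame features with an order-invariant mean to obtain a static code, and keeps per-frame projections as dynamic codes. The architecture, dimensions, and pooling choice are illustrative assumptions.

```python
# Order-invariant static code plus per-frame dynamic codes (schematic only).
import torch
import torch.nn as nn

class StaticDynamicEncoder(nn.Module):
    def __init__(self, frame_dim=64, hidden=128, static_dim=16, dynamic_dim=8):
        super().__init__()
        self.frame_net = nn.Sequential(nn.Linear(frame_dim, hidden), nn.ReLU())
        self.to_static = nn.Linear(hidden, static_dim)
        self.to_dynamic = nn.Linear(hidden, dynamic_dim)

    def forward(self, frames):              # frames: (batch, n_frames, frame_dim)
        h = self.frame_net(frames)          # per-frame features
        # Mean pooling over frames is permutation-invariant, so the static
        # code does not depend on the order in which frames are presented.
        static = self.to_static(h.mean(dim=1))     # (batch, static_dim)
        dynamic = self.to_dynamic(h)               # (batch, n_frames, dynamic_dim)
        return static, dynamic

enc = StaticDynamicEncoder()
frames = torch.randn(4, 10, 64)
static, dynamic = enc(frames)
static_shuffled, _ = enc(frames[:, torch.randperm(10)])
print(torch.allclose(static, static_shuffled, atol=1e-5))  # True: order-invariant
```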